Most people do not interact with Semantic Web data directly. Unless they have the expertise to understand the underlying technology, they need textual or visual interfaces to help them make sense of it. We explore the problem of generating natural language summaries for Semantic Web data. This is non-trivial, especially in an open-domain context. To address this problem, we explore the use of neural networks. Our system encodes the information from a set of triples into a vector of fixed dimensionality and generates a textual summary by conditioning the output on the encoded vector. We train and evaluate our models on two corpora of loosely aligned Wikipedia snippets and DBpedia and Wikidata triples with promising results.
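The abstract's core idea — encoding a set of triples into one fixed-dimensionality vector and conditioning text generation on it — can be illustrated with a minimal sketch. This is not the paper's trained model: the triples, vocabulary, embeddings, and the random-projection "decoder" below are all hypothetical stand-ins for learned neural components, chosen only to show the data flow.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Hypothetical shared vocabulary for triple elements and summary tokens.
vocab = ["<s>", "</s>", "London", "capital_of", "UK", "is", "the", "capital", "of"]
embed = {w: rng.normal(size=DIM) for w in vocab}  # stand-in for learned embeddings

def encode_triples(triples):
    """Encode a set of (subject, predicate, object) triples into a single
    fixed-dimensionality vector; here, by averaging token embeddings."""
    vecs = [embed[t] for triple in triples for t in triple]
    return np.mean(vecs, axis=0)

# Stand-in "decoder": a random projection scoring vocabulary items.
# In the paper this role is played by a trained neural generator.
W = rng.normal(size=(DIM, len(vocab)))

def generate(triples, max_len=5):
    """Greedily emit tokens, conditioning each step on the encoded vector."""
    state = encode_triples(triples)
    out = []
    for _ in range(max_len):
        word = vocab[int(np.argmax(state @ W))]
        if word == "</s>":
            break
        out.append(word)
        state = 0.5 * state + 0.5 * embed[word]  # fold the emitted token back in
    return out

summary = generate([("London", "capital_of", "UK")])
print(summary)
```

The key property shown is that any number of input triples collapses into one vector of dimension `DIM`, which then conditions every decoding step — the same shape of computation the abstract describes, with the toy components swapped out for trained networks in the actual system.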